update cachex and cachex provider #5012
Conversation
Note: Reviews paused. This branch appears to be under active development, so CodeRabbit automatically paused this review to avoid overwhelming it with comments from an influx of new commits. This behavior can be configured in the CodeRabbit settings.
📝 Walkthrough
Refactors the Cachex provider to use Cachex.fetch/expire semantics with ETS-backed per-cache TTLs, removes the Unlocker GenServer, adds a global Cachex provider, bumps Cachex to ~> 4.0, expands dataloader queryables and project resolvers, and greatly increases cache-related tests and concurrency coverage.
Sequence Diagram(s)sequenceDiagram
participant Caller
participant CacheProvider
participant Cachex
participant ComputeFunc
Caller->>CacheProvider: get_or_store(cache, key, func)
CacheProvider->>Cachex: Cachex.fetch(key, fetch_callback)
alt cache hit
Cachex-->>CacheProvider: {:ok, compressed}
CacheProvider-->>Caller: decompress -> {:ok, value}
else cache miss
Cachex->>CacheProvider: invoke fetch_callback(context)
CacheProvider->>ComputeFunc: execute func()
alt {:nocache, v} or {:middleware, ...}
ComputeFunc-->>CacheProvider: {:nocache|:middleware, v}
CacheProvider-->>Caller: return v (no cache write)
else returns value
ComputeFunc-->>CacheProvider: value
CacheProvider->>Cachex: commit compressed value with expire: ttl
Cachex-->>CacheProvider: {:ok, committed}
CacheProvider-->>Caller: decompress -> {:ok, value}
end
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Force-pushed 0f33839 to d329181
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@mix.exs`:
- Line 52: Update the Cachex dependency tuple in mix.exs from {:cachex, "~>
4.0"} to {:cachex, "~> 4.1"} to explicitly target the latest minor series; after
changing the version string, run mix deps.get (and optionally mix deps.unlock
--check-unused or mix deps.update cachex) to refresh the lockfile so the project
uses 4.1.x.
Pull request overview
This pull request updates Cachex from version 3.x to 4.1.1 and refactors the cache provider to use public APIs instead of private ones. It also adds comprehensive test coverage for the Cachex provider and introduces a new notifications stats feature for app notifications.
Changes:
- Updated Cachex dependency from ~> 3.4 to ~> 4.0, removing the custom locking mechanism (Unlocker module) and replacing it with Cachex's built-in Courier service for request deduplication
- Added extensive test coverage for CachexProvider covering concurrent access, error handling, and edge cases
- Added per-type unread notification counts (stats) to the app notifications API with optional cursor filtering
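The Courier-based deduplication mentioned in the first bullet can be sketched as follows, assuming Cachex ~> 4.x; the cache name `:demo_cache` and the ten-caller setup are illustrative, not project code:

```elixir
# Start a throwaway cache and a counter to observe fallback executions.
{:ok, _pid} = Cachex.start_link(:demo_cache)
{:ok, counter} = Agent.start_link(fn -> 0 end)

# Ten concurrent callers fetch the same missing key. Cachex routes them
# through its Courier, so the slow fallback runs once and the remaining
# callers wait for (and share) the committed value.
tasks =
  for _ <- 1..10 do
    Task.async(fn ->
      Cachex.fetch(:demo_cache, :expensive_key, fn _key ->
        Agent.update(counter, &(&1 + 1))
        Process.sleep(100)
        {:commit, :computed_value}
      end)
    end)
  end

results = Task.await_many(tasks)
# Every caller received :computed_value; the counter shows a single run.
IO.inspect({Agent.get(counter, & &1), Enum.map(results, &elem(&1, 1))})
```

This built-in deduplication is what makes the hand-rolled Unlocker GenServer removable.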
Reviewed changes
Copilot reviewed 4 out of 5 changed files in this pull request and generated no comments.
Show a summary per file
| File | Description |
|---|---|
| mix.exs | Updated Cachex dependency to ~> 4.0 |
| mix.lock | Locked Cachex to 4.1.1 and added ex_hash_ring dependency |
| lib/sanbase_web/graphql/cache/cachex_provider.ex | Refactored to use Cachex.fetch with Courier service, replaced ttl: with expire:, removed custom locking |
| lib/sanbase_web/graphql/cache/cachex_unlocker.ex | Removed entire custom unlocker module (no longer needed) |
| test/sanbase_web/cache/cachex_provider_test.exs | Added comprehensive test coverage including concurrent access patterns |
| lib/sanbase/app_notifications/app_notifications.ex | Added get_notifications_stats/2 function for per-type unread counts |
| lib/sanbase_web/graphql/schema/types/app_notification_types.ex | Added stats field to app_notifications_paginated |
| lib/sanbase_web/graphql/resolvers/app_notification_resolver.ex | Updated resolver to include stats in response |
| test/sanbase_web/graphql/app_notifications/app_notification_api_test.exs | Added test for stats functionality with cursor filtering |
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
test/sanbase_web/cache/cachex_provider_test.exs (1)
9-14: ⚠️ Potential issue | 🟡 Minor
`Process.exit(pid, :normal)` is a no-op from a different process — use `:shutdown` or `GenServer.stop`.

`on_exit` runs in a separate process. Sending a `:normal` exit signal from another process to a GenServer that doesn't trap exits is silently ignored per OTP semantics. In practice, ExUnit terminates the test process (which is linked via `start_link`) with `:shutdown`, which propagates through the link and cleans up the Cachex process — so this likely works incidentally. Still, the explicit cleanup is misleading.

Suggested fix
```diff
 setup do
   {:ok, pid} = CacheProvider.start_link(name: @cache_name, id: @cache_id)
-  on_exit(fn -> Process.exit(pid, :normal) end)
+  on_exit(fn -> if Process.alive?(pid), do: GenServer.stop(pid) end)
   :ok
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/sanbase_web/cache/cachex_provider_test.exs` around lines 9 - 14, The on_exit callback currently calls Process.exit(pid, :normal) which is a no-op when invoked from a different process; replace that explicit exit with a proper shutdown such as Process.exit(pid, :shutdown) or call GenServer.stop(pid) to ensure the CacheProvider process started by CacheProvider.start_link(name: `@cache_name`, id: `@cache_id`) is cleaned up reliably; update the on_exit to use one of those options (prefer GenServer.stop(pid) if you want a graceful stop) and remove the misleading :normal exit.
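To see why the review flags this, here is a standalone sketch (not project code — it uses a plain `Agent` rather than the `CacheProvider` under review) demonstrating that a `:normal` exit signal sent to another non-trapping process is ignored, while `GenServer.stop/1` shuts the process down gracefully:

```elixir
# A linked Agent stands in for the Cachex process started in the test setup.
{:ok, agent} = Agent.start_link(fn -> :state end)

# Per OTP semantics, an exit signal with reason :normal sent to another
# process that does not trap exits is silently discarded.
Process.exit(agent, :normal)
Process.sleep(50)
IO.puts(Process.alive?(agent))

# GenServer.stop/1 performs a synchronous, graceful shutdown instead.
:ok = GenServer.stop(agent)
IO.puts(Process.alive?(agent))
```

This is why the suggested fix swaps the misleading `Process.exit(pid, :normal)` for `GenServer.stop(pid)`.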
🧹 Nitpick comments (2)
lib/sanbase_web/graphql/cache/cachex_provider.ex (2)
7-10: Missing `@spec` and `@doc` on public callback functions.

Per coding guidelines, all public functions should have `@spec` and `@doc` annotations. While the `@impl` tag signals these are behaviour callbacks, adding typespecs would still benefit tooling (Dialyzer) and readability. Low priority given these are well-understood callbacks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/sanbase_web/graphql/cache/cachex_provider.ex` around lines 7 - 10, Add missing `@spec` and `@doc` for the public callback function(s) implemented in this module (e.g., start_link/1) so they conform to the project's guidelines; update the SanbaseWeb.Graphql.CachexProvider module by adding a short `@doc` describing the behavior/contract of start_link/1 and an appropriate `@spec` (matching the callback in SanbaseWeb.Graphql.CacheProvider) to the function implementation annotated with `@impl`, ensuring the types align with Cachex.start_link/1 and the behaviour's callback types.
82-97: Cachex handles exceptions gracefully — explicit rescue is optional.

An exception raised inside the fallback function passed to `Cachex.fetch` is caught by Cachex's internal Courier process and transformed into an `{:error, %Cachex.Error{...}}` tuple, which the existing case clause on line 85 already handles. Adding an explicit `rescue` clause would be defensive but redundant, since Cachex guarantees this behavior for all concurrent waiters. The current code respects the library's documented contract and remains clean without additional nesting.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/sanbase_web/graphql/cache/cachex_provider.ex` around lines 82 - 97, Do not add an explicit rescue around the fallback passed to Cachex.fetch; rely on Cachex's internal Courier to catch exceptions and return {:error, %Cachex.Error{...}}, which is already handled by the existing case branch matching {:error, _}; leave the Cachex.fetch block (the anonymous function using func.(), the {:ok, _} -> compress_value(ok_tuple), {:error, _} -> {:ignore, error}, {:nocache, value} -> {:ignore, {:nocache, value}}, and {:middleware, _, _} -> {:ignore, {:middleware, cache_modify_middleware.(cache, key, tuple)}}) unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/sanbase_web/cache/cachex_provider_test.exs`:
- Around line 455-485: The fallback passed into Cachex.fetch inside
CacheProvider.get_or_store must explicitly catch exceptions so spawned callers
don't crash; update the anonymous fallback (the function given to Cachex.fetch
in get_or_store) to wrap the existing pattern-matching return logic in a rescue
clause that converts any raised Exception into a safe {:ignore, {:error,
Exception.message(e)}} (or similar error tuple) so Cachex receives an ignore
result and all concurrent callers get {:error, _} rather than letting the
exception propagate; keep existing branches for {:ok, _}, {:error, _},
{:nocache, _} and {:middleware, _, _} and only handle unexpected exceptions in
the rescue.
---
Outside diff comments:
In `@test/sanbase_web/cache/cachex_provider_test.exs`:
- Around line 9-14: The on_exit callback currently calls Process.exit(pid,
:normal) which is a no-op when invoked from a different process; replace that
explicit exit with a proper shutdown such as Process.exit(pid, :shutdown) or
call GenServer.stop(pid) to ensure the CacheProvider process started by
CacheProvider.start_link(name: `@cache_name`, id: `@cache_id`) is cleaned up
reliably; update the on_exit to use one of those options (prefer
GenServer.stop(pid) if you want a graceful stop) and remove the misleading
:normal exit.
---
Nitpick comments:
In `@lib/sanbase_web/graphql/cache/cachex_provider.ex`:
- Around line 7-10: Add missing `@spec` and `@doc` for the public callback
function(s) implemented in this module (e.g., start_link/1) so they conform to
the project's guidelines; update the SanbaseWeb.Graphql.CachexProvider module by
adding a short `@doc` describing the behavior/contract of start_link/1 and an
appropriate `@spec` (matching the callback in SanbaseWeb.Graphql.CacheProvider) to
the function implementation annotated with `@impl`, ensuring the types align with
Cachex.start_link/1 and the behaviour's callback types.
- Around line 82-97: Do not add an explicit rescue around the fallback passed to
Cachex.fetch; rely on Cachex's internal Courier to catch exceptions and return
{:error, %Cachex.Error{...}}, which is already handled by the existing case
branch matching {:error, _}; leave the Cachex.fetch block (the anonymous
function using func.(), the {:ok, _} -> compress_value(ok_tuple), {:error, _} ->
{:ignore, error}, {:nocache, value} -> {:ignore, {:nocache, value}}, and
{:middleware, _, _} -> {:ignore, {:middleware, cache_modify_middleware.(cache,
key, tuple)}}) unchanged.
Force-pushed d329181 to fd5811e
Actionable comments posted: 3
🧹 Nitpick comments (3)
lib/sanbase_web/graphql/cache/cachex_provider_global.ex (1)
1-153: Significant code duplication with `CachexProvider`.

Nearly every function except `get_or_store/4` and `execute_cache_miss/4` is identical between `CachexProvider` and `CachexProviderGlobal` (`opts/1`, `size/1`, `count/1`, `clear_all/1`, `get/2`, `store/3`, `cache_item/3`, `true_key/1`, `compress_value/1`, `decompress_value/1`). Consider extracting the shared logic into a common module to reduce maintenance burden.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/sanbase_web/graphql/cache/cachex_provider_global.ex` around lines 1 - 153, The two modules duplicate most logic; extract the shared functions (opts/1, size/1, count/1, clear_all/1, get/2, store/3, cache_item/3, true_key/1, compress_value/1, decompress_value/1) into a new common module (e.g., SanbaseWeb.Graphql.CachexCommon or CachexProvider.Shared) and have SanbaseWeb.Graphql.CachexProvider and SanbaseWeb.Graphql.CachexProviderGlobal delegate to or import that module for those functions, keeping only get_or_store/4 and execute_cache_miss/4 (and the global locking logic with `@global_lock_nodes` and lock_key) in CachexProviderGlobal so its global trans semantics remain intact. Ensure child_spec/start_link still call opts/1 from the common module and update any fully-qualified references if needed.

lib/sanbase_web/graphql/resolvers/project/project_resolver.ex (1)
113-214: Missing `@doc` and `@spec` on new public functions.

Per coding guidelines, public functions should have `@doc` documentation and `@spec` type specifications. The existing functions in this module also lack these, so this is pre-existing — but worth noting for the new additions. As per coding guidelines: "Add '@doc' documentation to all public functions" and "Add typespecs (@spec) to all public functions".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/sanbase_web/graphql/resolvers/project/project_resolver.ex` around lines 113 - 214, All newly added public resolver functions (project_main_contract_address, project_contract_addresses, eth_addresses_resolver_fun, social_volume_query, source_slug_mappings, market_segment, market_segments, project_market_segment_tags) are missing `@doc` and `@spec`; add a short `@doc` for each describing purpose and return value, and add `@spec` type specifications matching their signatures (accepting a Project struct, args map, and Absinthe context/resolution) and the actual return shape {:ok, ...} (e.g., string or nil, list of structs, list of strings, etc.) so the docs and types reflect the existing return values in each function.

lib/sanbase_web/graphql/schema/types/project_types.ex (1)
462-463: Optional: rename `eth_addresses_resolver_fun/3` — the `_fun` suffix is superfluous.

Every peer resolver in this file follows a plain `<noun>/3` or `<verb_noun>/3` pattern. The `_fun` suffix implies this was a variable name for a lambda rather than a named function, making it a readability outlier. A name like `eth_addresses/3` or `project_eth_addresses/3` would be consistent with the surrounding resolver function names.

✏️ Suggested rename

```diff
- cache_resolve(&ProjectResolver.eth_addresses_resolver_fun/3)
+ cache_resolve(&ProjectResolver.eth_addresses/3)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/sanbase_web/graphql/schema/types/project_types.ex` around lines 462 - 463, Rename the resolver function ProjectResolver.eth_addresses_resolver_fun/3 to a clearer name (e.g. eth_addresses/3 or project_eth_addresses/3) and update all references (for example the cache_resolve call that currently uses &ProjectResolver.eth_addresses_resolver_fun/3) to point to the new function name; also rename the function definition inside the ProjectResolver module to the new identifier and run the test/compile to ensure no remaining references to eth_addresses_resolver_fun/3 remain.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/sanbase_web/graphql/cache/cachex_provider_global.ex`:
- Line 4: The module attribute `@global_lock_nodes` uses node() at compile time
which fixes it to the build node and breaks distributed locking; change any use
of `@global_lock_nodes` (and the attribute itself) so the current runtime node is
computed when acquiring the lock (e.g., replace the attribute with a function or
inline call that calls node() at runtime) and pass that runtime-derived node
list into :global.trans (ensure the code that calls :global.trans references the
new runtime helper or calls node() directly).
In `@lib/sanbase_web/graphql/cache/cachex_provider.ex`:
- Around line 87-89: Remove the misleading Process.put call in the Cachex
fallback clause: inside the {:nocache, value} branch of the fallback in
lib/sanbase_web/graphql/cache/cachex_provider.ex (the clause that currently does
Process.put(:do_not_cache_query, true) and returns {:ignore, {:nocache,
value}}), delete the Process.put(:do_not_cache_query, true) line because Cachex
runs the fallback in a Courier worker so this sets the flag in the worker, not
the caller; rely on the existing caller-side flag setting in the caller logic
around the execute_cache_miss/fetch handling instead.
- Around line 96-115: The clause matching {:error, %Cachex.Error{message:
message}} is dead because Cachex safe APIs return {:error, atom}; replace that
pattern with a single clause that matches {:error, error_atom} and convert it to
a readable message via Cachex.Error.long_form(error_atom) (e.g. return {:error,
Cachex.Error.long_form(error_atom)}), and remove or consolidate the separate
{:error, _} = error clause so all error atoms from Cachex.get/fetch/etc. are
handled consistently; keep the existing {:commit, ...}, {:ok, ...}, {:ignore,
...} branches and the calls to decompress_value and Process.put as-is.
---
Duplicate comments:
In `@test/sanbase_web/cache/cachex_provider_test.exs`:
- Around line 455-485: The test can hang if the worker process crashes, so
modify the concurrent caller spawning to monitor child processes (use
spawn_monitor or Task.async/Task.await) and handle their DOWN messages: for each
spawned caller that crashes, treat the {:DOWN, _} notification as an {:error, _}
result so the test collects a result per caller; keep using
CacheProvider.get_or_store to invoke the failing function and still assert that
all collected results match {:error, _} and that CacheProvider.get(`@cache_name`,
:concurrent_raise_key) returns nil.
---
Nitpick comments:
In `@lib/sanbase_web/graphql/cache/cachex_provider_global.ex`:
- Around line 1-153: The two modules duplicate most logic; extract the shared
functions (opts/1, size/1, count/1, clear_all/1, get/2, store/3, cache_item/3,
true_key/1, compress_value/1, decompress_value/1) into a new common module
(e.g., SanbaseWeb.Graphql.CachexCommon or CachexProvider.Shared) and have
SanbaseWeb.Graphql.CachexProvider and SanbaseWeb.Graphql.CachexProviderGlobal
delegate to or import that module for those functions, keeping only
get_or_store/4 and execute_cache_miss/4 (and the global locking logic with
`@global_lock_nodes` and lock_key) in CachexProviderGlobal so its global trans
semantics remain intact. Ensure child_spec/start_link still call opts/1 from the
common module and update any fully-qualified references if needed.
In `@lib/sanbase_web/graphql/resolvers/project/project_resolver.ex`:
- Around line 113-214: All newly added public resolver functions
(project_main_contract_address, project_contract_addresses,
eth_addresses_resolver_fun, social_volume_query, source_slug_mappings,
market_segment, market_segments, project_market_segment_tags) are missing `@doc`
and `@spec`; add a short `@doc` for each describing purpose and return value, and
add `@spec` type specifications matching their signatures (accepting a Project
struct, args map, and Absinthe context/resolution) and the actual return shape
{:ok, ...} (e.g., string or nil, list of structs, list of strings, etc.) so the
docs and types reflect the existing return values in each function.
In `@lib/sanbase_web/graphql/schema/types/project_types.ex`:
- Around line 462-463: Rename the resolver function
ProjectResolver.eth_addresses_resolver_fun/3 to a clearer name (e.g.
eth_addresses/3 or project_eth_addresses/3) and update all references (for
example the cache_resolve call that currently uses
&ProjectResolver.eth_addresses_resolver_fun/3) to point to the new function
name; also rename the function definition inside the ProjectResolver module to
the new identifier and run the test/compile to ensure no remaining references to
eth_addresses_resolver_fun/3 remain.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/sanbase_web/graphql/cache/cachex_provider.ex`:
- Around line 167-182: The catch clause in ensure_opts_ets/0 doesn't match the
actual exception raised by :ets.new/2 and can let a race-condition error escape;
update the try/catch to handle the Erlang :badarg atom (use catch :error,
:badarg -> :ok) instead of expecting a tuple or Elixir struct, so when
:ets.new(:sanbase_graphql_cachex_opts, ...) fails because the table already
exists the code swallows that specific error and continues; keep the rest of the
function unchanged and only adjust the catch clauses in ensure_opts_ets/0 to
match :badarg.
In `@test/sanbase_web/cache/cachex_provider_test.exs`:
- Around line 546-572: The test "uses custom reclaim when provided and limit
hook prunes to max_entries minus reclaim" is timing-sensitive: the fixed
Process.sleep(250) may not allow Cachex.Limit.Scheduled (configured via
CacheProvider.start_link with limit_check_interval_ms: 100) to run on slow CI,
causing flaky failures; update the test to either increase the sleep budget
(e.g., to ~1_000ms) or change the assertion on CacheProvider.count(name) to
accept a range (e.g., <= expected_after_reclaim or within a small window) so the
test no longer requires an exact count at a specific millisecond, ensuring
stability across environments.
---
Duplicate comments:
In `@lib/sanbase_web/graphql/cache/cachex_provider.ex`:
- Around line 132-133: The pattern matching clause {:error,
%Cachex.Error{message: message}} in the cache handling code never matches
because Cachex.get/3 and Cachex.fetch/3 return {:error, atom} for safe APIs;
replace the dead pattern with a clause that matches {:error, err_atom} and
convert it to a readable message via Cachex.Error.long_form(err_atom) (e.g. in
the function that currently contains the {:error, %Cachex.Error{...}} clause,
update the match to use {:error, err} -> {:error, Cachex.Error.long_form(err)}).
- Around line 116-118: The Process.put(:do_not_cache_query, true) call inside
the Cachex.fetch fallback (the clause matching {:nocache, value}) sets the flag
in the worker process and is ineffective; remove that Process.put call from the
{:nocache, value} branch in lib/sanbase_web/graphql/cache/cachex_provider.ex so
the fallback simply returns {:ignore, {:nocache, value}} and let the existing
caller-side clause that inspects {:nocache, value} handle the no-cache behavior.
In `@test/sanbase_web/cache/cachex_provider_test.exs`:
- Around line 455-485: The test relies on exceptions raised in the fallback
being converted to {:error, _} for all waiting callers; to make this
deterministic, update CacheProvider.get_or_store so the computation passed into
Cachex.fetch (the fallback used for :concurrent_raise_key scenarios) wraps the
user computation in a try/rescue and returns a normalized error tuple instead of
letting the exception propagate—i.e., catch the exception in the fallback inside
CacheProvider.get_or_store (or the helper it uses) and return an {:error,
reason} value to CacheProvider.get_or_store callers (ensuring CacheProvider.get
still returns nil after a failure).
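The rescue-wrapped fallback suggested in the duplicate comment above can be sketched as follows. This is a standalone illustration assuming Cachex ~> 4.x is available; the cache name `:demo_cache_rescue` and the raised message are illustrative, not project code:

```elixir
# Start a throwaway cache for the demonstration.
{:ok, _pid} = Cachex.start_link(:demo_cache_rescue)

# Wrapping the computation in try/rescue inside the fetch fallback converts
# an exception into an :ignore result, so nothing is written to the cache
# and every concurrent waiter receives a normalized error tuple.
result =
  Cachex.fetch(:demo_cache_rescue, :raise_key, fn _key ->
    try do
      raise "boom"
    rescue
      e -> {:ignore, {:error, Exception.message(e)}}
    end
  end)

{:ignore, {:error, "boom"}} = result

# The key was never committed, so a plain get still sees nothing.
{:ok, nil} = Cachex.get(:demo_cache_rescue, :raise_key)
```

This matches the assertion in the test under review: all callers see `{:error, _}` and `CacheProvider.get` returns nil after a failed computation.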
Actionable comments posted: 2
♻️ Duplicate comments (1)
lib/sanbase_web/graphql/cache/cachex_provider.ex (1)
116-118: `Process.put(:do_not_cache_query, true)` runs in the Cachex Courier worker process, not the caller.

This flag is set inside the `Cachex.fetch` fallback, which Cachex executes in a separate Courier process. The `Process.put` on line 117 modifies the Courier worker's process dictionary — not the caller's. The caller-side handling on lines 140–142 correctly sets the flag again, so line 117 is dead code. Consider removing it to avoid confusion.

Proposed fix

```diff
 {:nocache, value} ->
-  Process.put(:do_not_cache_query, true)
   {:ignore, {:nocache, value}}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/sanbase_web/graphql/cache/cachex_provider.ex` around lines 116 - 118, Remove the dead Process.put call inside the Cachex.fetch fallback: when matching {:nocache, value} in the fallback branch (the clause that currently calls Process.put(:do_not_cache_query, true) and returns {:ignore, {:nocache, value}}), delete the Process.put(:do_not_cache_query, true) invocation and leave the branch to simply return {:ignore, {:nocache, value}} because the caller-side handling (the logic around the caller that sets the flag again on lines ~140–142) already sets the :do_not_cache_query flag in the caller process.
🧹 Nitpick comments (7)
lib/sanbase_web/graphql/cache/cachex_provider_global.ex (4)
1-5: Missing `@doc` and `@spec` on public functions.

Same as `CachexProvider` — none of the public callback implementations have documentation or typespecs. As per coding guidelines, all public functions should have `@doc` and `@spec`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/sanbase_web/graphql/cache/cachex_provider_global.ex` around lines 1 - 5, Public callback implementations in the SanbaseWeb.Graphql.CachexProviderGlobal module lack `@doc` and `@spec` annotations; add a short `@doc` string and appropriate `@spec` for each public function that implements the SanbaseWeb.Graphql.CacheProvider behaviour (match the callback types defined in SanbaseWeb.Graphql.CacheProvider), e.g., for functions like get/2, put/3, delete/2, etc., mirror the docs and types used in CachexProvider to ensure consistency and compile-time type checks.
147-151: `:erlang.binary_to_term/1` — prefer the `:safe` option for defense-in-depth.

While the data is self-produced via `compress_value`, using `:erlang.binary_to_term(data, [:safe])` guards against atom-table exhaustion if corrupted or tampered data ever enters the cache (e.g., via a shared ETS table or external Cachex store).

Proposed fix

```diff
 defp decompress_value(value) do
   value
   |> :zlib.gunzip()
-  |> :erlang.binary_to_term()
+  |> :erlang.binary_to_term([:safe])
 end
```

The same applies to `CachexProvider.decompress_value/1` (line 206).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/sanbase_web/graphql/cache/cachex_provider_global.ex` around lines 147 - 151, The decompress functions currently call :erlang.binary_to_term/1 which can create atoms from untrusted data; update both decompress_value functions (the local one and CachexProvider.decompress_value) to call :zlib.gunzip() piped into :erlang.binary_to_term(..., [:safe]) instead of the single-argument form, preserving the same pipeline and return behavior so corrupted/tampered data cannot exhaust the atom table.
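The compress/decompress round-trip discussed above can be sketched with anonymous functions; the helper names mirror the provider's `compress_value`/`decompress_value` but are standalone assumptions, not the project code:

```elixir
# Serialize a term and gzip it, as the provider does before storing.
compress = fn term ->
  term
  |> :erlang.term_to_binary()
  |> :zlib.gzip()
end

decompress = fn binary ->
  binary
  |> :zlib.gunzip()
  # :safe rejects payloads that would create atoms not already present in
  # the atom table, guarding against atom-table exhaustion on tampered data.
  |> :erlang.binary_to_term([:safe])
end

value = %{slug: "bitcoin", price: 42.0}
# Round-trip succeeds because all atoms in the term already exist locally.
^value = value |> compress.() |> decompress.()
```

Note that `:safe` only rejects *unknown* atoms, so self-produced cache entries (whose atoms the writing node already has) decode unchanged.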
103-128: `execute_cache_miss/4` case statement has no fallback clause for unexpected `func.()` return values.

If `func.()` returns a value that doesn't match `{:ok, _}`, `{:error, _}`, `{:nocache, _}`, or `{:middleware, _, _}`, this will raise a `CaseClauseError` inside `:global.trans`, which will propagate to the caller. A catch-all clause would make this more resilient.

Proposed fix

```diff
   {:middleware, _middleware_module, _args} = tuple ->
     cache_modify_middleware.(cache, key, tuple)
+
+  other ->
+    other
 end
```

Note: the same gap exists in `CachexProvider.get_or_store/4` (lines 109–122) where the `Cachex.fetch` fallback also lacks a catch-all in the inner `case`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/sanbase_web/graphql/cache/cachex_provider_global.ex` around lines 103 - 128, The case in execute_cache_miss/4 currently matches only {:ok,_}, {:error,_}, {:nocache,_} and {:middleware,_,_} and will crash on any other func.() return; add a final fallback clause (e.g. catch_all -> handle unexpected result by logging or returning an {:error, {:unexpected_cache_result, catch_all}} tuple) so the function never raises CaseClauseError. Update the branch to consistently either treat unexpected returns as errors or pass them through safely (use the same handling approach in the inner case inside CachexProvider.get_or_store/4 where Cachex.fetch's result is pattern-matched) and reference cache_item/3 and cache_modify_middleware.(...) paths to preserve existing behavior for known tuples.
29-45: Unhandled `{:error, _}` from `Cachex.inspect/2` and `Cachex.size/2`.

Same as in `CachexProvider` — `size/1` and `count/1` will crash with `MatchError` if the cache is unavailable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/sanbase_web/graphql/cache/cachex_provider_global.ex` around lines 29 - 45, size/1 and count/1 currently pattern-match only on {:ok, ...} from Cachex.inspect/2 and Cachex.size/2 and will crash on {:error, _}; update these functions (size/1 and count/1) to handle both {:ok, value} and {:error, reason} returns (e.g. return {:ok, rounded_mb} on success and {:error, reason} on failure) and likewise update clear_all (which calls Cachex.clear/1) to handle {:error, reason} instead of assuming {:ok, _}; avoid using bare pattern matches that cause MatchError and return a consistent {:ok, ...} / {:error, ...} shape.

lib/sanbase_web/graphql/cache/cachex_provider.ex (3)
176-192: Redundant catch clauses — `:ets.new/2` only raises `:error, :badarg`.

Line 183 correctly catches `:error, :badarg`, but lines 182 (`{:badarg, _}`) and 184 (`%ArgumentError{}`) will never match an `:ets.new/2` failure. They're harmless dead code but add confusion.

Proposed fix

```diff
 try do
   :ets.new(:sanbase_graphql_cachex_opts, [:named_table, :public, :set])
 catch
-  :error, {:badarg, _} -> :ok
   :error, :badarg -> :ok
-  :error, %ArgumentError{} -> :ok
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/sanbase_web/graphql/cache/cachex_provider.ex` around lines 176 - 192, The catch block in ensure_opts_ets/0 includes impossible patterns; simplify it by removing the redundant clauses and only catching the actual error raised by :ets.new/2. Locate ensure_opts_ets and its try..catch around :ets.new(:sanbase_graphql_cachex_opts, ...) and replace the multiple catch clauses with a single clause that matches :error, :badarg (i.e., catch :error, :badarg -> :ok) so only the real failure mode is handled and the dead patterns ({:badarg, _} and %ArgumentError{}) are removed.
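The idempotent table-creation pattern this comment converges on can be demonstrated standalone; the module and table names below are illustrative, not the project's `ensure_opts_ets/0`:

```elixir
defmodule EnsureTable do
  # Create a named ETS table if it does not exist yet. A second creation
  # attempt raises an Erlang :badarg error, which the single catch clause
  # swallows so repeated calls are safe.
  def ensure(name) do
    :ets.new(name, [:named_table, :public, :set])
    :ok
  catch
    :error, :badarg -> :ok
  end
end

:ok = EnsureTable.ensure(:demo_opts_table)
# Second call hits the :badarg branch because the table already exists.
:ok = EnsureTable.ensure(:demo_opts_table)
```

Only the bare `:error, :badarg` clause is needed, matching the review's point that the tuple and struct patterns are dead code.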
1-8: Missing `@doc` and `@spec` on all public functions.

None of the public callback functions (`start_link`, `child_spec`, `size`, `count`, `clear_all`, `get`, `store`, `get_or_store`) have `@doc` or `@spec` annotations. As per coding guidelines, all public functions should have both.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/sanbase_web/graphql/cache/cachex_provider.ex` around lines 1 - 8, Add missing `@doc` and `@spec` annotations for all public callback functions in SanbaseWeb.Graphql.CachexProvider: start_link/1, child_spec/1, size/1, count/1, clear_all/1, get/2, store/3, and get_or_store/3. For each function add a brief `@doc` describing its purpose and an accurate `@spec` reflecting the expected argument and return types (e.g., start_link(opts :: keyword()) :: {:ok, pid()} | {:error, term()}, child_spec(opts :: keyword()) :: Supervisor.child_spec(), size/count/clear_all should accept the cache name/type and return integer/ok tuples, get(key, cache) and store/3/get_or_store/3 should use key :: term(), value :: term(), ttl/type as appropriate and return {:ok, term()} | :not_found | {:error, term()}), matching the CacheProvider behaviour callbacks so Dialyzer and the behaviour contract are satisfied.
58-68: Unhandled error from `Cachex.inspect/2` and `Cachex.size/2`.

Both `size/1` and `count/1` pattern-match on `{:ok, _}` without handling `{:error, _}`. If the cache is not running, these will crash with a `MatchError`.

Proposed fix (example for size)

```diff
 def size(cache) do
-  {:ok, bytes_size} = Cachex.inspect(cache, {:memory, :bytes})
-  (bytes_size / (1024 * 1024)) |> Float.round(2)
+  case Cachex.inspect(cache, {:memory, :bytes}) do
+    {:ok, bytes_size} -> (bytes_size / (1024 * 1024)) |> Float.round(2)
+    {:error, _} = error -> error
+  end
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/sanbase_web/graphql/cache/cachex_provider.ex` around lines 58 - 68, The size/1 and count/1 functions (in module SanbaseWeb.Graphql.CachexProvider) currently assume Cachex.inspect(cache, {:memory, :bytes}) and Cachex.size(cache) always return {:ok, _} and will crash on {:error, _}; update both functions to pattern-match both {:ok, value} and {:error, reason} results from Cachex.inspect/2 and Cachex.size/2 and return an explicit {:ok, result} on success (e.g., for size convert bytes to MB and wrap in {:ok, mb}) and {:error, reason} on failure so callers don’t get a MatchError when the cache isn’t running.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/sanbase_web/graphql/cache/cachex_provider.ex`:
- Around line 162-174: The rescue block in the private function
default_ttl_seconds/1 uses Logger.error but the module does not require the
Logger macros; add a module-level require Logger near the top of the module (in
the same module that defines default_ttl_seconds) so Logger.error/1 compiles;
locate the module declaration that contains default_ttl_seconds/1 and insert
require Logger there.
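The `require Logger` fix above can be sketched in isolation: `Logger.error/1` is a macro, so the module must require `Logger` before invoking it, e.g. from a rescue clause (the module and function names here are invented for the example):

```elixir
defmodule RequireLoggerSketch do
  # Logger.error/1 is a macro, so the module must require Logger
  # before it can be used inside a rescue clause.
  require Logger

  def to_integer_or_zero(str) do
    String.to_integer(str)
  rescue
    e in ArgumentError ->
      Logger.error("parse failed: " <> Exception.message(e))
      0
  end
end
```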
- Around line 132-138: The code calls Cachex.Error.long_form(error) in the
:error and :ignore branches but that function expects an atom and will crash for
string or other error values; add a small helper (e.g., format_error/1) that
returns Cachex.Error.long_form(error) when error is an atom, returns the string
unchanged when error is a binary, and falls back to inspect(error) for other
types, then replace the direct Cachex.Error.long_form(...) calls in the clauses
with format_error(error) (reference symbols: Cachex.Error.long_form/1 and add
format_error/1).
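A sketch of that `format_error/1` helper; in the real provider the atom clause would call `Cachex.Error.long_form/1`, which is stubbed with `Atom.to_string/1` here so the snippet runs standalone:

```elixir
defmodule FormatErrorSketch do
  # Stand-in for Cachex.Error.long_form(error), which accepts atoms only.
  def format_error(error) when is_atom(error), do: Atom.to_string(error)
  # Strings pass through unchanged.
  def format_error(error) when is_binary(error), do: error
  # Anything else is rendered with inspect/1.
  def format_error(error), do: inspect(error)
end
```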
---
Duplicate comments:
In `@lib/sanbase_web/graphql/cache/cachex_provider.ex`:
- Around line 116-118: Remove the dead Process.put call inside the Cachex.fetch
fallback: when matching {:nocache, value} in the fallback branch (the clause
that currently calls Process.put(:do_not_cache_query, true) and returns
{:ignore, {:nocache, value}}), delete the Process.put(:do_not_cache_query, true)
invocation and leave the branch to simply return {:ignore, {:nocache, value}}
because the caller-side handling (the logic around the caller that sets the flag
again on lines ~140–142) already sets the :do_not_cache_query flag in the caller
process.
---
Nitpick comments:
In `@lib/sanbase_web/graphql/cache/cachex_provider_global.ex`:
- Around line 1-5: Public callback implementations in the
SanbaseWeb.Graphql.CachexProviderGlobal module lack `@doc` and `@spec` annotations;
add a short `@doc` string and appropriate `@spec` for each public function that
implements the SanbaseWeb.Graphql.CacheProvider behaviour (match the callback
types defined in SanbaseWeb.Graphql.CacheProvider), e.g., for functions like
get/2, put/3, delete/2, etc., mirror the docs and types used in CachexProvider
to ensure consistency and compile-time type checks.
- Around line 147-151: The decompress functions currently call
:erlang.binary_to_term/1 which can create atoms from untrusted data; update both
decompress_value functions (the local one and CachexProvider.decompress_value)
to call :zlib.gunzip() piped into :erlang.binary_to_term(..., [:safe]) instead
of the single-argument form, preserving the same pipeline and return behavior so
corrupted/tampered data cannot exhaust the atom table.
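The hardened pipeline could look like the following round-trip sketch; `[:safe]` makes `:erlang.binary_to_term/2` raise on data containing atoms that do not already exist, so corrupted input cannot grow the atom table:

```elixir
defmodule SafeDecompressSketch do
  def compress_value(value) do
    value |> :erlang.term_to_binary() |> :zlib.gzip()
  end

  def decompress_value(binary) do
    # [:safe] rejects terms that would create new atoms or
    # external function references.
    binary |> :zlib.gunzip() |> :erlang.binary_to_term([:safe])
  end
end
```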
- Around line 103-128: The case in execute_cache_miss/4 currently matches only
{:ok,_}, {:error,_}, {:nocache,_} and {:middleware,_,_} and will crash on any
other func.() return; add a final fallback clause (e.g. catch_all -> handle
unexpected result by logging or returning an {:error, {:unexpected_cache_result,
catch_all}} tuple) so the function never raises CaseClauseError. Update the
branch to consistently either treat unexpected returns as errors or pass them
through safely (use the same handling approach in the inner case inside
CachexProvider.get_or_store/4 where Cachex.fetch's result is pattern-matched)
and reference cache_item/3 and cache_modify_middleware.(...) paths to preserve
existing behavior for known tuples.
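The shape of the suggested fallback clause, reduced to a standalone function; the tuple shapes mirror the ones listed above, and the module name is invented:

```elixir
defmodule CacheMissSketch do
  def handle_result(result) do
    case result do
      {:ok, value} -> {:ok, value}
      {:error, _reason} = error -> error
      {:nocache, value} -> {:ok, value}
      {:middleware, _mod, _opts} = middleware -> middleware
      # Final fallback: never raise CaseClauseError on unknown shapes.
      other -> {:error, {:unexpected_cache_result, other}}
    end
  end
end
```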
- Around line 29-45: size/1 and count/1 currently pattern-match only on {:ok,
...} from Cachex.inspect/2 and Cachex.size/2 and will crash on {:error, _};
update these functions (size/1 and count/1) to handle both {:ok, value} and
{:error, reason} returns (e.g. return {:ok, rounded_mb} on success and {:error,
reason} on failure) and likewise update clear_all (which calls Cachex.clear/1)
to handle {:error, reason} instead of assuming {:ok, _}; avoid using bare
pattern matches that cause MatchError and return a consistent {:ok, ...} /
{:error, ...} shape.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
lib/sanbase_web/graphql/cache/cachex_provider.ex
lib/sanbase_web/graphql/cache/cachex_provider_global.ex
resolved functions. In the same PR we migrate Cachex from 3.x to the latest 4.1 version and stop using the private APIs for locking, relying only on Cachex.fetch. The dataloader/3 macro uses the pid of the calling process as part of the cache key, but the Cachex.fetch cache-miss function is executed in a different process. As a result, the dataloader cannot find the data and returns empty/nil. Fix by using named functions where we explicitly construct the key and call the Dataloader functions.
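The pid mismatch can be demonstrated without Cachex: any key built from `self()` inside a function executed by another process differs from the caller's key, so per-process lookups miss. Here `Task.async/1` stands in for the separate process in which `Cachex.fetch` runs its cache-miss function:

```elixir
# Key as the caller process would build it.
caller_key = {:dataloader, self()}

# The cache-miss function runs in another process; Task.async/1
# simulates that here.
fallback_key =
  Task.async(fn -> {:dataloader, self()} end)
  |> Task.await()

# Different pids produce different keys, so the lookup misses.
IO.inspect(caller_key == fallback_key)
```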